attachment style
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- Asia > China > Beijing > Beijing (0.04)
- Asia > Singapore (0.04)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Public Health (0.97)
- Information Technology (0.93)
- Leisure & Entertainment (0.68)
The Language of Attachment: Modeling Attachment Dynamics in Psychotherapy
Bredgaard, Frederik, Trinhammer, Martin Lund, Bassignana, Elisa
The delivery of mental healthcare through psychotherapy stands to benefit immensely from developments within Natural Language Processing (NLP), in particular through the automatic identification of patient-specific qualities such as attachment style. Currently, attachment style is assessed manually using the Patient Attachment Coding System (PACS; Talia et al., 2017), which is complex, resource-intensive, and requires extensive training. To enable wide and scalable adoption of attachment-informed treatment and research, we propose the first exploratory analysis of automatically assessing patient attachment style from psychotherapy transcripts using NLP classification models. We further analyze the results and discuss the implications of using automated tools for this purpose; for example, confusing 'preoccupied' patients with 'avoidant' ones likely harms therapy outcomes more than other mislabelings. Our work opens an avenue of research enabling more personalized psychotherapy and more targeted research into the mechanisms of psychotherapy through advancements in NLP.
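The abstract describes attachment-style prediction as a text-classification task over transcripts. As a minimal illustrative sketch only (the labels, toy "transcripts", and nearest-centroid method below are assumptions for illustration, not the paper's actual models or data):

```python
# Illustrative sketch: a tiny bag-of-words nearest-centroid classifier
# over hypothetical attachment-style labels. Not the paper's method.
from collections import Counter
import math

LABELS = ["secure", "preoccupied", "avoidant"]  # assumed label set

def bow(text):
    """Lower-cased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def train_centroids(examples):
    """examples: list of (transcript, label) pairs.
    Returns a label -> summed bag-of-words centroid mapping."""
    centroids = {lab: Counter() for lab in LABELS}
    for text, lab in examples:
        centroids[lab].update(bow(text))
    return centroids

def predict(centroids, text):
    """Assign the label whose centroid is most similar to the text."""
    v = bow(text)
    return max(LABELS, key=lambda lab: cosine(centroids[lab], v))

# Toy, invented training snippets, for illustration only.
train = [
    ("i felt safe and supported talking about it", "secure"),
    ("i keep worrying they will leave me i cannot stop", "preoccupied"),
    ("i would rather not talk about feelings at all", "avoidant"),
]
cents = train_centroids(train)
```

Real systems would replace the bag-of-words features with transformer embeddings, but the pipeline shape (featurize transcript, fit classifier, predict style) is the same.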
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > United States (0.04)
- Asia > Middle East > Saudi Arabia > Asir Province > Abha (0.04)
- Asia > Middle East > Jordan (0.04)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.67)
Chatting Up Attachment: Using LLMs to Predict Adult Bonds
Soares, Paulo, McCurdy, Sean, Gerber, Andrew J., Fonagy, Peter
Obtaining data in the medical field is challenging, making the adoption of AI technology in this space slow and high-risk. We evaluate whether we can overcome this obstacle with synthetic data generated by large language models (LLMs). In particular, we use GPT-4 and Claude 3 Opus to create agents that simulate adults with varying profiles, childhood memories, and attachment styles. These agents participate in simulated Adult Attachment Interviews (AAI), and we use their responses to train models for predicting their underlying attachment styles. We evaluate our models using a transcript dataset from nine humans who underwent the same interview protocol, analyzed and labeled by mental health professionals. Our findings indicate that training the models using only synthetic data achieves performance comparable to training the models on human data. Additionally, while the raw embeddings from synthetic answers occupy a distinct space compared to those from real human responses, the introduction of unlabeled human data and a simple standardization allows for a closer alignment of these representations. This adjustment is supported by qualitative analyses and is reflected in the enhanced predictive accuracy of the standardized embeddings.
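The "simple standardization" that pulls synthetic and human embedding spaces together can be sketched as per-dimension z-scoring, each pool normalized by its own statistics. This is an assumption about what the standardization step does; the embeddings below are random stand-ins, not real AAI data:

```python
# Sketch: z-scoring two embedding pools so their distributions align.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic-agent embeddings sit in a shifted, rescaled region
# relative to human ones, mimicking the distinct spaces noted above.
human = rng.normal(loc=0.0, scale=1.0, size=(50, 8))
synthetic = rng.normal(loc=3.0, scale=2.0, size=(200, 8))

def standardize(x):
    """Per-dimension z-score using the array's own mean and std."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

human_std = standardize(human)
synthetic_std = standardize(synthetic)

# After standardization both clouds have per-dimension mean ~0 and
# std ~1, so a classifier trained on synthetic data transfers better.
gap_before = np.linalg.norm(human.mean(0) - synthetic.mean(0))
gap_after = np.linalg.norm(human_std.mean(0) - synthetic_std.mean(0))
```

Note that standardizing the human pool needs only unlabeled human transcripts, which matches the abstract's point that unlabeled human data suffices for the alignment.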
- North America > United States > California > Alameda County > Berkeley (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (4 more...)
- Personal > Interview (1.00)
- Research Report > New Finding (0.66)
Benchmarking Foundation Models with Language-Model-as-an-Examiner
Bai, Yushi, Ying, Jiahao, Cao, Yixin, Lv, Xin, He, Yuze, Wang, Xiaozhi, Yu, Jifan, Zeng, Kaisheng, Xiao, Yijia, Lyu, Haozhe, Zhang, Jiayin, Li, Juanzi, Hou, Lei
Numerous benchmarks have been established to assess the performance of foundation models on open-ended question answering, which serves as a comprehensive test of a model's ability to understand and generate language in a manner similar to humans. Most of these works focus on proposing new datasets; however, we see two main issues within previous benchmarking pipelines, namely testing leakage and evaluation automation. In this paper, we propose a novel benchmarking framework, Language-Model-as-an-Examiner, where the LM serves as a knowledgeable examiner that formulates questions based on its knowledge and evaluates responses in a reference-free manner. Our framework allows for effortless extensibility, as various LMs can be adopted as the examiner and the questions can be constantly updated given more diverse trigger topics. For a more comprehensive and equitable evaluation, we devise three strategies: (1) We instruct the LM examiner to generate questions across a multitude of domains to probe a broad range of knowledge, and to raise follow-up questions for a more in-depth assessment. (2) Upon evaluation, the examiner combines both scoring and ranking measurements, providing a reliable result that aligns closely with human annotations. (3) We additionally propose a decentralized Peer-examination method to address the biases of a single examiner. Our data and benchmarking results are available at: http://lmexam.xlore.cn.
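The decentralized peer-examination idea can be sketched as follows: several examiners each score every answer, and per-answer scores are averaged so no single examiner's bias dominates. The examiners below are stand-in scoring functions rather than real LMs, and the averaging rule is an assumption, not the paper's exact combination of scoring and ranking:

```python
# Sketch: aggregating multiple examiners' scores per answer.
from statistics import mean

def peer_examine(answers, examiners):
    """answers: {model_name: answer_text}; examiners: list of
    functions mapping an answer to a float score in [0, 1].
    Returns {model_name: mean score across examiners}."""
    return {
        model: mean(score(ans) for score in examiners)
        for model, ans in answers.items()
    }

# Toy examiners, each rewarding a different surface property,
# standing in for LM judges with different biases.
examiners = [
    lambda a: min(len(a.split()) / 10, 1.0),    # length-biased judge
    lambda a: 1.0 if "because" in a else 0.5,   # rewards a reasoning cue
    lambda a: 1.0 if a.endswith(".") else 0.8,  # rewards completeness
]

answers = {
    "model_a": "The sky appears blue because of Rayleigh scattering.",
    "model_b": "Blue.",
}
scores = peer_examine(answers, examiners)
```

Averaging across judges with different biases is the essential point: an answer that only flatters one examiner's bias gains less than one that is robustly good.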
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- Asia > China > Beijing > Beijing (0.04)
- Asia > Singapore (0.04)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Public Health (0.97)
- Leisure & Entertainment (0.68)
- Information Technology > Security & Privacy (0.67)
The Internet Thinks We Don't Know Its Secret. But I Do.
She had lived in a nursing home for 10 years, and communicated with her sister, and the world, through Alexa. Two days after Lou Ann died of complications from coronavirus, her sister found recordings of Lou Ann's voice asking Alexa, "How do I get help?" Maybe you are reading this in your bed on your phone wherever you are this morning. I was having what I thought of as a weak stretch in my life, when I didn't have a regular job, and when just deciding what I would do to avoid writing, or having a single thought about my email, was enough to short-circuit me and I would find myself still in pajamas at 5 p.m., pacing and crying, Googling What's wrong with me and waiting until it was OK to go to bed again. In such weak stretches, among the many indulgences I permit myself is the minor suboptimal habit of actually sleeping with my phone. Under the other pillow next to me, where no one sleeps. In other, more robust stretches, my phone spends the night plugged in about a foot away on the nightstand, and I can still reach it if I wake up and want to look at it, but it's tethered. When I let it sleep freely with me, I can turn over while I look at it. I can look at it while I'm lying on my left side, and then I can turn over and look at it while I'm lying on my right side. I just charge it the next day, because it doesn't matter if either of us is ready to go in the morning. On this particular morning I opened my eyes and looked at my phone in the bed next to me, and as I put my hand on it, I said, "I belong to you."
- North America > United States > New York (0.04)
- North America > United States > Michigan > Kent County > Grand Rapids (0.04)
Machine Love
While ML generates much economic value, many of us have problematic relationships with social media and other ML-powered applications. One reason is that ML often optimizes for what we want in the moment, which is easy to quantify but at odds with what is known scientifically about human flourishing. Thus, through its impoverished models of us, ML currently falls far short of its exciting potential, which is for it to help us to reach ours. While there is no consensus on defining human flourishing, from diverse perspectives across psychology, philosophy, and spiritual traditions, love is understood to be one of its primary catalysts. Motivated by this view, this paper explores whether there is a useful conception of love fitting for machines to embody, as historically it has been generative to explore whether a nebulous concept, such as life or intelligence, can be thoughtfully abstracted and reimagined, as in the fields of machine intelligence or artificial life. This paper forwards a candidate conception of machine love, inspired in particular by work in positive psychology and psychotherapy: to provide unconditional support enabling humans to autonomously pursue their own growth and development. Through proof of concept experiments, this paper aims to highlight the need for richer models of human flourishing in ML, provide an example framework through which positive psychology can be combined with ML to realize a rough conception of machine love, and demonstrate that current language models begin to enable embodying qualitative humanistic principles. The conclusion is that though at present ML may often serve to addict, distract, or divide us, an alternative path may be opening up: We may align ML to support our growth, through it helping us to align ourselves towards our highest aspirations.
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Media (0.67)
How to Develop Secure Attachment as an Adult
People with secure attachments can form and maintain close relationships. Discover what secure attachment is and how to change your attachment style as an adult. Secure attachment refers to the ability to form healthy long-term relationships with friends, family, and romantic partners. It develops in early childhood: primary caregivers must meet a child's needs in infancy and early childhood to help the child feel safe, and this sense of security aids the development of a secure attachment.
Trust levels in AI predicted by people's relationship style
A University of Kansas interdisciplinary team led by relationship psychologist Omri Gillath has published a new paper in the journal Computers in Human Behavior showing people's trust in artificial intelligence (AI) is tied to their relationship or attachment style. The research indicates for the first time that people who are anxious about their relationships with humans tend to have less trust in AI as well. Importantly, the research also suggests trust in artificial intelligence can be increased by reminding people of their secure relationships with other humans. Grand View Research estimated the global artificial-intelligence market at $39.9 billion in 2019, projected to expand at a compound annual growth rate of 42.2% from 2020 to 2027. However, lack of trust remains a key obstacle to adopting new artificial intelligence technologies.
Do You Trust Artificial Intelligence?
Artificial intelligence (AI) is everywhere. In a typical day, people likely use AI multiple times without even knowing it: Alexa and Siri, Google Maps, Uber and Lyft, autopilot on commercial flights, spam filters, smart email categorization (for anyone using Gmail, Yahoo, or Office 365/Outlook), mobile check deposits, plagiarism checkers, online searches, personalized recommendations, and Facebook, Instagram, and Pinterest are all examples of AI. But what happens when people are introduced to a new AI technology? How likely are they to trust it? With an interdisciplinary team of researchers from the University of Kansas, we set out to find out.
- Information Technology > Services (1.00)
- Transportation > Passenger (0.94)
- Transportation > Ground > Road (0.58)